
    Optimal VDC service provisioning in optically interconnected disaggregated data centers

    The virtual data center (VDC) is a key service in modern data center (DC) infrastructures. However, the rigid architecture of traditional servers inside DCs may lead to blocking situations when deploying VDC instances. To overcome this problem, the disaggregated DC paradigm has been introduced. In this letter, we present an integer linear programming (ILP) formulation to optimally allocate VDC requests on top of an optically interconnected disaggregated DC infrastructure, aiming to quantify the benefits that such an architecture can bring compared with traditional server-centric DCs. Moreover, a lightweight simulated annealing-based heuristic is provided for scenarios where ILP scalability becomes a challenge. The obtained numerical results reveal the substantial benefits yielded by the resource disaggregation paradigm.
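
    The simulated annealing heuristic itself is not detailed in the abstract; the following Python sketch only illustrates how such a lightweight heuristic is typically structured for a VDC-to-resource mapping problem. The single-dimension CPU/blade model, the cost function (number of blades used) and all parameter values are illustrative assumptions, not the paper's formulation.

```python
import math
import random

# Toy model (assumption): each VDC virtual node requests some CPU units and
# must be mapped onto one of several disaggregated CPU blades of finite capacity.

def cost(mapping):
    # stand-in objective: number of distinct blades in use
    return len(set(mapping))

def feasible(mapping, demands, capacities):
    load = {}
    for vnode, blade in enumerate(mapping):
        load[blade] = load.get(blade, 0) + demands[vnode]
    return all(load[b] <= capacities[b] for b in load)

def simulated_annealing(demands, capacities, iters=5000, t0=10.0, alpha=0.999):
    current = list(range(len(demands)))     # trivial one-node-per-blade start
    best = current[:]
    t = t0
    for _ in range(iters):
        cand = current[:]
        cand[random.randrange(len(cand))] = random.randrange(len(capacities))
        if not feasible(cand, demands, capacities):
            continue
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current[:]
        t *= alpha
    return best

if __name__ == "__main__":
    demands = [4, 2, 6, 3, 1]        # CPU units per VDC virtual node (assumed)
    capacities = [8, 8, 8, 8, 8]     # CPU units per disaggregated blade (assumed)
    print(simulated_annealing(demands, capacities))
```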

    Lightpath fragmentation for efficient spectrum utilization in dynamic elastic optical networks

    The spectrum-sliced elastic optical path network (SLICE) architecture has been presented as an efficient solution for flexible bandwidth allocation in optical networks. A problem homologous to the classical Routing and Wavelength Assignment (RWA) arises in such an architecture, called Routing and Spectrum Assignment (RSA). Imposed by the current transmission technologies enabling the elastic optical network concept, the spectrum contiguity constraint must be ensured in the RSA problem, meaning that the bandwidth requested by any connection must be allocated over a contiguous portion of the spectrum along the path between source and destination nodes. In a dynamic network scenario, where incoming connections are established and torn down in a rather random fashion, spectral resources tend to be highly fragmented, preventing the allocation of large contiguous spectrum portions for high data-rate connection requests. As a result, high data-rate connections experience an unfairly increased blocking probability in contrast to low data-rate ones. In view of this, the present article proposes a lightpath fragmentation mechanism that uses the idle transponders in the source node of a high data-rate connection request to fragment it into multiple low data-rate ones, which are more easily allocated in the network. In addition, to support such an operation, a lightweight RSA algorithm is also proposed to properly allocate the generated lightpath fragments over the spectrum. The benefits of the proposed approach are quantified through extensive simulations, showing a drastically reduced high data-rate connection blocking probability compared to usual contiguous bandwidth allocation, while keeping the performance of low data-rate requests at similar levels.
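
    As a rough illustration of the spectrum contiguity constraint and the fragmentation fallback described above, the sketch below works on a single link modelled as a boolean slot array; the paper's RSA algorithm operates end to end and with its own fragment sizing, so all names and sizes here are assumptions.

```python
def first_fit_contiguous(slots, demand):
    """Return the start index of `demand` contiguous free slots, or None."""
    run = 0
    for i, busy in enumerate(slots):
        run = 0 if busy else run + 1
        if run == demand:
            return i - demand + 1
    return None

def allocate_with_fragmentation(slots, demand, fragment_size):
    """Try to place the demand whole; otherwise split it into smaller
    fragments served by idle transponders (the lightpath fragmentation idea)."""
    start = first_fit_contiguous(slots, demand)
    if start is not None:
        for s in range(start, start + demand):
            slots[s] = True
        return [(start, demand)]
    placements, remaining = [], demand
    while remaining > 0:
        size = min(fragment_size, remaining)
        start = first_fit_contiguous(slots, size)
        if start is None:
            return None                      # blocked even after fragmenting
        for s in range(start, start + size):
            slots[s] = True
        placements.append((start, size))
        remaining -= size
    return placements

if __name__ == "__main__":
    link = [False, True, False, False, True, False, False, False, True, False]
    print(allocate_with_fragmentation(link, demand=5, fragment_size=2))
```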

    Experimenting with real application-specific QoS guarantees in a large-scale RINA demonstrator

    This paper reports the definition, setup and obtained results of the Fed4FIRE+ medium experiment ERASER, aimed at evaluating the actual Quality of Service (QoS) guarantees that the clean-slate Recursive InterNetwork Architecture (RINA) can deliver to heterogeneous applications at large scale. To this goal, a 37-node 5G metro/regional RINA network scenario, spanning from the end-user to the server where applications run in a data center, has been configured in the Virtual Wall experimentation facility. This scenario has initially been loaded with synthetic application traffic flows with diverse QoS requirements, thus reproducing different network load conditions. Next, their experienced end-to-end QoS metrics have been measured with two different QTAMux (i.e., the most accepted candidate scheduling policy for providing RINA with its QoS support) deployment scenarios. Moreover, on this RINA network scenario loaded with synthetic application traffic flows, a real HD (1080p) video streaming demonstration has also been conducted, setting up video streaming sessions to end-users at different network locations and illustrating the perceived Quality of Experience (QoE). The results obtained in ERASER disclose that, by appropriately deploying and configuring QTAMux, RINA can yield effective QoS support, which provided perfect QoE at almost all locations in our demo when assigning video traffic flows the highest (i.e., Gold) QoS Cube.

    Cost-efficient virtual optical network embedding for manageable inter-data-center connectivity

    Network virtualization opens the door to novel infrastructure services offering connectivity and node manageability. In this letter, we focus on the cost-efficient embedding of on-demand virtual optical network requests for interconnecting geographically distributed data centers. We present a mixed integer linear programming formulation that introduces flexibility in the virtual-physical node mapping to optimize the usage of the underlying physical resources. Illustrative results show that flexibility in the node mapping can reduce the number of add-drop ports required to serve the offered demands by 40%.
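
    To make the "flexibility in the virtual-physical node mapping" idea concrete, here is a deliberately small mapping model written with the PuLP solver. It only assigns virtual nodes to candidate physical nodes so as to minimize add-drop port usage; the paper's MILP additionally embeds the virtual links, and the candidate sets and port costs below are invented for illustration.

```python
import pulp

virtual_nodes = ["vDC1", "vDC2", "vDC3"]
physical_nodes = ["A", "B", "C", "D"]
# ports[(v, p)]: add-drop ports needed if virtual node v is hosted at p (assumed)
port_matrix = [[2, 3, 4, 2],
               [3, 2, 2, 4],
               [4, 4, 2, 3]]
ports = {(v, p): port_matrix[i][j]
         for i, v in enumerate(virtual_nodes)
         for j, p in enumerate(physical_nodes)}

prob = pulp.LpProblem("flexible_node_mapping", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", list(ports), cat="Binary")

# objective: total add-drop ports used across the mapping
prob += pulp.lpSum(ports[k] * x[k] for k in ports)

# each virtual node is mapped to exactly one physical node
for v in virtual_nodes:
    prob += pulp.lpSum(x[(v, p)] for p in physical_nodes) == 1

# at most one virtual node per physical node (no co-location assumed here)
for p in physical_nodes:
    prob += pulp.lpSum(x[(v, p)] for v in virtual_nodes) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([k for k in ports if x[k].value() > 0.5])
```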

    Route, modulation format, MIMO and spectrum assignment in Flex-Grid/MCF transparent optical core networks

    In this paper, we target an optimal multiple-input multiple-output digital signal processing (MIMO-DSP) assignment for super-channels affected by inter-core crosstalk (ICXT) in multicore fiber (MCF) enabled transparent optical core networks. MIMO-DSP undoes ICXT effects, but can be costly with high core density MCFs. Hence, its implementation in the network must be carefully decided. We address our objective as a joint route, modulation format, MIMO and spectrum assignment (RMMSA) problem, for which integer linear programming formulations are provided to solve it optimally in small network scenarios. Moreover, several heuristic approaches are also proposed to solve large-scale problem instances with good accuracy. Their goal is to minimize both the network spectral requirements and the number of MIMO-equalized super-channels, taking a crosstalk-free space division multiplexing (SDM) solution as a reference, for example, one based on parallel single-mode fibers [i.e., a multifiber (MF) scenario]. For our evaluation, we consider several state-of-the-art MCF prototypes and different network topologies. The obtained results, with the considered MCFs, disclose that in national backbone networks the required percentage of super-channels with MIMO equalization to match the performance of an equivalent crosstalk-free SDM solution ranges from 0% to 36%, while in continental-wide networks this range widens to 0% to 56%. In addition, in the case of a non-ideal MIMO (with 3 dB/km of crosstalk compensation), such percentages range from 0% to 28% and from 0% to 45% in national and continental-wide backbone networks, respectively, exhibiting a performance gap of up to 12% with respect to the MF reference scenario.
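
    The core trade-off (spend spectrum on a more robust modulation format versus spend DSP on MIMO equalization) can be sketched as a per-super-channel decision rule. The crosstalk accumulation model, the per-km ICXT figure and the format thresholds below are illustrative assumptions, not the values used in the paper.

```python
import math

XT_PER_KM_DB = -55.0            # assumed per-km inter-core crosstalk
ICXT_THRESHOLD_DB = {           # assumed maximum tolerable ICXT per format
    "PM-QPSK": -17.0,
    "PM-16QAM": -25.0,
}

def accumulated_icxt_db(length_km):
    # crosstalk assumed to grow linearly with length in linear units
    return XT_PER_KM_DB + 10 * math.log10(length_km)

def needs_mimo(length_km, modulation):
    """True if the route's accumulated ICXT exceeds the format threshold,
    i.e. the super-channel would have to be MIMO-equalized (or re-routed)."""
    return accumulated_icxt_db(length_km) > ICXT_THRESHOLD_DB[modulation]

for km in (200, 800, 3000):
    print(km, "km:", {m: needs_mimo(km, m) for m in ICXT_THRESHOLD_DB})
```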

    Evaluation of probabilistic constellation shaping performance in Flex Grid over multicore fiber dynamic optical backbone networks [Invited]

    In this paper, we present a worst-case methodology for estimating the attainable spectral efficiency over end-to-end paths across a Flex-Grid over multicore fiber (MCF) optical network. This methodology accounts for the physical link noise, as well as for the signal-to-noise ratio in the Add module (SNR_TX) of spatial-division-multiplexing-enabled reconfigurable optical add and drop multiplexers (SDM-ROADMs), which introduces a dominant noise contribution over that of their Bypass and Drop modules. The proposed methodology is subsequently used to quantify the benefits that probabilistic constellation shaping (PCS) can bring to Flex-Grid/MCF dynamic optical backbone networks, compared to using traditional polarization-multiplexed modulation formats. In a first step, insight is provided into the spectral efficiency attainable along the precomputed end-to-end paths in two reference backbone networks, either using PCS or traditional modulation formats. Moreover, in each of these networks, two SNR_TX values are identified: the SNR_TX yielding the maximum average path spectral efficiency, as well as an SNR_TX that, although slightly degrading the average path spectral efficiency (by 10%), would still enable a cost-effective SDM-ROADM Add module implementation. Extensive simulations are conducted to analyze the PCS offered load gains under a 1% bandwidth blocking probability. Furthermore, the study finally focuses on finding out whether lower fragmentation levels in Flex-Grid/MCF dynamic optical backbone networks can push the PCS benefits even further. Funding: Agencia Estatal de Investigación (PID2020-118011GB-C21, PID2020-118011GB-C22, RED2018-102585-T).
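
    A minimal sketch of the worst-case SNR combination behind such a methodology: the Add-module contribution (SNR_TX) and the accumulated link noise are treated as independent and combined in the linear domain, and a Shannon-style bound stands in for the paper's spectral-efficiency mapping. All numeric values are illustrative assumptions.

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

def combined_snr_db(snr_link_db, snr_tx_db):
    # independent noise contributions add up in the linear domain
    lin = 1.0 / (1.0 / db_to_lin(snr_link_db) + 1.0 / db_to_lin(snr_tx_db))
    return 10 * math.log10(lin)

def spectral_efficiency_bound(snr_db, polarizations=2):
    # Shannon-style upper bound, used here only as a placeholder mapping
    return polarizations * math.log2(1 + db_to_lin(snr_db))

for snr_tx_db in (20, 25, 30):
    snr_db = combined_snr_db(snr_link_db=18, snr_tx_db=snr_tx_db)
    print(f"SNR_TX={snr_tx_db} dB -> end-to-end SNR={snr_db:.1f} dB, "
          f"SE <= {spectral_efficiency_bound(snr_db):.2f} bit/s/Hz")
```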

    On the benefits of resource disaggregation for virtual data centre provisioning in optical data centres

    Virtual Data Centre (VDC) allocation requires the provisioning of both computing and network resources. Their joint provisioning allows for an optimal utilization of the physical Data Centre (DC) infrastructure resources. However, traditional DCs can suffer from computing resource underutilization due to the rigid capacity configurations of the server units, resulting in high computing resource fragmentation across the DC servers. To overcome these limitations, the disaggregated DC paradigm has recently been introduced. Thanks to resource disaggregation, it is possible to allocate the exact amount of resources needed to provision a VDC instance. In this paper, we focus on the static planning of a shared optically interconnected disaggregated DC infrastructure to support a known set of VDC instances to be deployed on top of it. To this end, we provide optimal and sub-optimal techniques to determine the capacity (in terms of both computing and network resources) required to support the expected set of VDC demands. Next, we quantitatively evaluate the benefits yielded by the disaggregated DC paradigm against traditional DC architectures, considering various VDC profiles and Data Centre Network (DCN) topologies.
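
    The intuition behind the fragmentation argument can be reproduced with a one-dimensional toy comparison: packing the same VDC CPU demands onto fixed-capacity servers versus drawing them from a disaggregated pool. The demand and capacity figures are invented; the paper's planning techniques dimension computing and network resources jointly.

```python
def servers_needed_ffd(demands, server_capacity):
    """First-fit-decreasing packing of VDC CPU demands onto fixed servers."""
    residual = []
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(residual):
            if free >= d:
                residual[i] -= d
                break
        else:
            residual.append(server_capacity - d)
    return len(residual)

def disaggregated_units_needed(demands, unit_capacity):
    """With a disaggregated pool, only the aggregate demand matters."""
    total = sum(demands)
    return -(-total // unit_capacity)        # ceiling division

if __name__ == "__main__":
    vdc_cpu_demands = [6, 6, 6, 6, 5, 5, 3]  # assumed demands (CPU units)
    print("server-centric:", servers_needed_ffd(vdc_cpu_demands, 8), "servers of 8")
    print("disaggregated :", disaggregated_units_needed(vdc_cpu_demands, 8), "units of 8")
```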

    Burst contention avoidance schemes in hybrid GMPLS-enabled OBS/OCS optical networks

    Hybrid optical network architectures, combining the benefits of optical circuit and burst switching technologies, have become a natural evolution to improve overall network performance while reducing related costs. This paper concentrates on preventive contention avoidance schemes to decrease the burst loss probability at the OBS layer of such hybrid network scenarios. In operation, the proposed solution locally reacts to highly loaded downstream node situations by preventively deflecting bursts through a less loaded neighbor. Two different approaches for disseminating adjacent node state information are presented and extensively evaluated. In the first approach, the current node state information is propagated downstream in the burst control packet, keeping pace with OBS traffic dynamics. The second approach targets a lower control overhead. In this case, averaged node state statistics are included in the Hello messages of the GMPLS Link Management Protocol (LMP), which are exchanged between neighboring nodes over the OCS control layer every 150 ms. The obtained results validate the applicability of both approaches. Moreover, they indicate that, depending on the mean burst size, either one or the other approach is favorable.
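
    The local deflection decision can be summarized as follows (illustrative only: the occupancy threshold and the neighbour load estimates, obtained either from burst control packets or from averaged LMP Hello statistics, are assumptions of this sketch).

```python
DEFLECTION_THRESHOLD = 0.8      # assumed occupancy above which bursts are deflected

def next_hop(default_node, neighbours, load):
    """Pick the next hop for a burst: keep the default downstream node unless
    its reported load is too high, in which case deflect to the least loaded
    alternative neighbour. `load` maps node id -> occupancy estimate in [0, 1]."""
    if load.get(default_node, 0.0) <= DEFLECTION_THRESHOLD:
        return default_node
    alternatives = [n for n in neighbours if n != default_node]
    return min(alternatives, key=lambda n: load.get(n, 0.0), default=default_node)

if __name__ == "__main__":
    load = {"B": 0.92, "C": 0.35, "D": 0.60}     # assumed state information
    print(next_hop("B", ["B", "C", "D"], load))  # deflects the burst to C
```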

    Cognitive science applied to reduce network operation margins

    In an increasingly competitive market environment with smaller product offer differentiation, a continuous maximization of efficiency, while guaranteeing the quality of the provided services, remains a main objective for any telecom operator. In this work, we address the reduction of the operational costs of the optical transport network as one of the possible fields of action to achieve this aim. We propose to apply cognitive science to reduce these costs, specifically by reducing operation margins. We base our work on the case-based reasoning technique, proposing several new schemes to reduce the operation margins established during the design and commissioning phases of the optical link power budgets. From the obtained results, we find that our cognitive proposal provides a feasible solution allowing significant savings on transmitted power that can reach 49%. We show that there is a certain dependency on network conditions, achieving higher efficiency in lightly loaded networks, where improvements can rise up to 53%.
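
    A bare-bones case-based reasoning cycle (retrieve the closest previously commissioned link, reuse its validated margin) conveys the idea; the case features, distance metric and margin values below are invented for illustration and do not reproduce the schemes proposed in the paper.

```python
import math

# past cases (assumed): (link_length_km, number_of_spans, validated_power_margin_dB)
CASE_BASE = [
    (80.0, 1, 1.5),
    (240.0, 3, 2.0),
    (400.0, 5, 2.5),
]

def retrieve(length_km, spans):
    """Return the stored case closest to the new link, using a normalized
    Euclidean distance over the two features."""
    def distance(case):
        return math.hypot((case[0] - length_km) / 400.0, (case[1] - spans) / 5.0)
    return min(CASE_BASE, key=distance)

def proposed_margin(length_km, spans, design_margin_db=3.0):
    closest = retrieve(length_km, spans)
    # reuse step: adopt the experienced margin, never exceeding the design margin
    return min(closest[2], design_margin_db)

print(proposed_margin(250.0, 3))   # reuses the 2.0 dB margin of the closest case
```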

    End-user traffic policing for QoS assurance in polyservice RINA networks

    Looking at the ever-increasing amount of heterogeneous distributed applications supported on current data transport networks, it seems evident that best-effort packet delivery falls short of supplying their actual needs. Multiple approaches to Quality of Service (QoS) differentiation have been proposed over the years, but their usage has always been hindered by the rigidity of the TCP/IP-based Internet model, which does not even allow applications to express their QoS needs to the underlying network. In this context, the Recursive InterNetwork Architecture (RINA) has appeared as a clean-slate network architecture aiming to replace the current Internet based on TCP/IP. RINA provides well-defined QoS support across layers, with standard means for layers to inform of the different QoS guarantees they can support. Besides, applications and other processes can express their flow requirements, including different QoS-related measures, such as delay and jitter, drop probability or average traffic usage. Greedy end-users, however, tend to request the highest quality for their flows, forcing providers to apply intelligent data rate limitation procedures at the edge of their networks. In this work, we propose a new rate limiting policy that, instead of enforcing limits on a per-QoS-class basis, imposes limits on several independent QoS dimensions. This offers flexible traffic control to RINA network providers, while enabling end-users to freely manage their leased resources. The performance of the proposed policy is assessed in an experimental RINA network test-bed and compared against other policies, either RINA-specific or adopted from TCP/IP. Results show that the proposed policy achieves effective traffic control for high QoS traffic classes, while also letting lower QoS classes profit from the capacity initially reserved for the former when available.
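
    To illustrate "limits per QoS dimension" as opposed to "limits per QoS class", the sketch below keeps one token bucket per rate-limited dimension level and lets a packet through only if every limited dimension it requests has credit. Dimension names, rates and burst sizes are assumptions of this sketch, not the parameters of the proposed policy.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate, self.capacity = rate_bps, burst_bits
        self.tokens, self.last = burst_bits, time.monotonic()

    def refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

# one bucket per rate-limited QoS dimension level (assumed dimensions and values)
buckets = {
    ("urgency", "high"): TokenBucket(rate_bps=2e6, burst_bits=1e5),
    ("loss", "low"):     TokenBucket(rate_bps=5e6, burst_bits=2e5),
}

def police(packet_bits, requested):
    """requested: e.g. {"urgency": "high", "loss": "low"}. The packet passes
    only if every limited dimension it asks for has enough tokens."""
    limited = [buckets[(d, lvl)] for d, lvl in requested.items() if (d, lvl) in buckets]
    for b in limited:
        b.refill()
    if all(b.tokens >= packet_bits for b in limited):
        for b in limited:
            b.tokens -= packet_bits
        return True
    return False

print(police(12000, {"urgency": "high", "loss": "low"}))
```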